14 research outputs found

    Data pruning and neural scaling laws: fundamental limitations of score-based algorithms

    Data pruning algorithms are commonly used to reduce the memory and computational cost of the optimization process. Recent empirical results reveal that random data pruning remains a strong baseline and outperforms most existing data pruning methods in the high compression regime, i.e., where a fraction of 30% or less of the data is kept. This regime has recently attracted a lot of interest as a result of the role of data pruning in improving the so-called neural scaling laws; in [Sorscher et al.], the authors showed the need for high-quality data pruning algorithms in order to beat the sample power law. In this work, we focus on score-based data pruning algorithms and show theoretically and empirically why such algorithms fail in the high compression regime. We demonstrate "No Free Lunch" theorems for data pruning and present calibration protocols that enhance the performance of existing pruning algorithms in this high compression regime using randomization.
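
    The abstract does not spell out the calibration protocols, but the basic idea of combining score ranking with randomization can be sketched as follows. The function `prune_by_score` and its `random_share` knob are illustrative assumptions, not the paper's method: part of the kept budget is filled by the highest-scoring examples and the rest is drawn uniformly at random.

    ```python
    import numpy as np

    def prune_by_score(scores, keep_fraction, random_share=0.5, rng=None):
        """Keep `keep_fraction` of the data: part of the budget goes to the
        highest-scoring examples, the rest is drawn uniformly at random from
        the remaining pool. `random_share` is a hypothetical calibration knob."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(scores)
        n_keep = int(keep_fraction * n)
        n_random = int(random_share * n_keep)
        n_top = n_keep - n_random

        order = np.argsort(scores)[::-1]       # highest score first
        top_idx = order[:n_top]                # score-selected part of the budget
        pool = order[n_top:]                   # candidates for the random part
        rand_idx = rng.choice(pool, size=n_random, replace=False)
        return np.concatenate([top_idx, rand_idx])

    # Example: keep 30% of 10,000 points, half of that budget chosen at random.
    scores = np.random.rand(10_000)
    kept = prune_by_score(scores, keep_fraction=0.3)
    ```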

    The Normal-Generalised Gamma-Pareto process: A novel pure-jump Lévy process with flexible tail and jump-activity properties

    Pure-jump Lévy processes are a popular class of stochastic processes that have found many applications in finance, statistics and machine learning. In this paper, we propose a novel family of self-decomposable Lévy processes in which the tail behavior and the jump activity of the process can be controlled separately, via two different parameters. Crucially, we show that increments of this process can be sampled exactly at any time scale; this allows the implementation of likelihood-free Markov chain Monte Carlo algorithms for (asymptotically) exact posterior inference. We use this novel process in Lévy-based stochastic volatility models to predict the returns of stock market data, and show that the proposed class of models leads to superior predictive performance compared to classical alternatives.
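
    As a rough illustration of how exactly sampled increments feed into a Lévy-based stochastic volatility model, here is a minimal sketch. The function `sample_increments` uses a gamma subordinator purely as a stand-in; the paper's exact sampler for Normal-Generalised Gamma-Pareto increments is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_increments(dt, size):
        # Placeholder for exact increment sampling of the proposed process; a
        # gamma subordinator is used as a stand-in for illustration only.
        return rng.gamma(shape=dt, scale=1.0, size=size)

    def simulate_returns(n_steps, dt=1.0, mu=0.0):
        # Generic Levy-driven stochastic volatility: the subordinator increment
        # over each interval plays the role of the spot variance.
        variance = sample_increments(dt, n_steps)
        return mu * dt + np.sqrt(variance) * rng.standard_normal(n_steps)

    returns = simulate_returns(n_steps=250)   # e.g. one year of daily returns
    ```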

    An Information Theoretic approach to Post Randomization Methods under Differential Privacy

    Post Randomization Methods (PRAM) are among the most popular disclosure limitation techniques for both categorical and continuous data. In the categorical case, given a stochastic matrix M and a specified variable, an individual belonging to category i is changed to category j with probability M_{i,j}. Every approach to choosing the randomization matrix M has to balance two desiderata: 1) preserving as much statistical information from the raw data as possible; 2) guaranteeing the privacy of individuals in the dataset. This trade-off has generally proved very challenging to resolve. In this work, we use recent tools from the computer science literature and propose to choose M as the solution of a constrained maximization problem: we maximize the Mutual Information between raw and transformed data, subject to the constraint that the transformation satisfies the notion of Differential Privacy. For the general categorical model, we show that this maximization problem reduces to a convex linear program and can therefore be solved with known optimization algorithms.
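
    A minimal sketch of the two quantities being traded off, assuming the usual matrix form of epsilon-differential privacy for a per-record randomization matrix (column-wise probability ratios bounded by exp(eps)) and the standard definition of mutual information; the paper's linear-programming reformulation itself is not reproduced.

    ```python
    import numpy as np

    def is_eps_dp(M, eps):
        # M[i, j] = P(output category j | input category i). epsilon-DP requires
        # every column's entries to lie within a factor exp(eps) of each other.
        ratios = M.max(axis=0) / M.min(axis=0)
        return bool(np.all(ratios <= np.exp(eps)))

    def mutual_information(p, M):
        # Mutual information (in nats) between raw category X ~ p and the PRAM
        # output Y, where Y | X = i is distributed as M[i, :].
        joint = p[:, None] * M
        py = joint.sum(axis=0)
        terms = joint * np.log(joint / (p[:, None] * py[None, :]))
        return float(np.sum(terms))

    # Example: a randomized-response style matrix over 3 categories.
    k, eps = 3, np.log(4.0)
    M = np.full((k, k), 1.0)
    np.fill_diagonal(M, 3.0)          # diagonal weight kept below exp(eps)
    M /= M.sum(axis=1, keepdims=True)
    p = np.array([0.5, 0.3, 0.2])
    print(is_eps_dp(M, eps), mutual_information(p, M))
    ```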

    An Optimization Framework For Anomaly Detection Scores Refinement With Side Information

    This paper considers an anomaly detection problem in which a detection algorithm assigns anomaly scores to multi-dimensional data points, such as cellular networks' Key Performance Indicators (KPIs). We propose an optimization framework to refine these anomaly scores by leveraging side information in the form of a causality graph between the various features of the data points. The refinement block builds on causality theory and a proposed notion of confidence scores. After motivating our framework, we prove smoothness properties for the ensuing mathematical expressions. Next, equipped with these results, we propose a gradient descent algorithm and prove its convergence to a stationary point. Our results hold (i) for any causal anomaly detection algorithm and (ii) for any side information in the form of a directed acyclic graph. Numerical results illustrate the advantage of our proposed framework in dealing with False Positives (FPs) and False Negatives (FNs). Additionally, we analyze the effect of the graph's structure on the expected performance advantage and the various trade-offs that take place.
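
    The paper's objective and confidence scores are not given in the abstract; the sketch below assumes a simple illustrative objective (fidelity to the detector's scores plus smoothness along the causal edges) just to show the shape of a gradient-descent refinement over a DAG.

    ```python
    import numpy as np

    def refine_scores(scores, edges, lam=0.5, lr=0.1, n_iters=200):
        # Gradient descent on an assumed objective:
        #   J(x) = 0.5 * ||x - scores||^2
        #          + 0.5 * lam * sum over edges (i, j) of (x[i] - x[j])^2,
        # i.e. stay close to the detector's scores while varying smoothly along
        # the edges of the causality graph.
        x = scores.astype(float).copy()
        for _ in range(n_iters):
            grad = x - scores
            for i, j in edges:                 # DAG edges (cause -> effect)
                diff = x[i] - x[j]
                grad[i] += lam * diff
                grad[j] -= lam * diff
            x -= lr * grad
        return x

    # Example: 4 KPIs whose causal graph is the chain 0 -> 1 -> 2 -> 3.
    scores = np.array([0.9, 0.1, 0.8, 0.2])
    edges = [(0, 1), (1, 2), (2, 3)]
    print(refine_scores(scores, edges))
    ```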

    Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility

    This article studies the infinite-width limit of deep feedforward neural networks whose weights are dependent and modelled via a mixture of Gaussian distributions. Each hidden node of the network is assigned a nonnegative random variable that controls the variance of the outgoing weights of that node. We make minimal assumptions on these per-node random variables: they are iid and their sum, in each layer, converges to some finite random variable in the infinite-width limit. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals. If the scalar parameters are strictly positive and the Lévy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the Lévy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime: one obtains correlated outputs with non-Gaussian distributions, possibly with heavy tails. Additionally, we show that, in this regime, the weights are compressible, and some nodes have asymptotically non-negligible contributions, therefore representing important hidden features. Many sparsity-promoting neural network models can be recast as special cases of our approach, and we discuss their infinite-width limits; we also present an asymptotic analysis of the pruning error. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated, MNIST and Fashion MNIST datasets.
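
    A toy sketch of the contrast between iid Gaussian output weights and dependent weights driven by per-node scales. The heavy-tailed scale distribution and the normalisation used here are stand-ins chosen for illustration only; they do not reproduce the paper's infinite-width construction or its Lévy-measure characterisation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def wide_layer_output(x, width=5_000, dependent=True):
        # One ReLU hidden layer with a scalar output. With dependent=True, every
        # hidden node gets a nonnegative random scale for its outgoing weight, so
        # the output weights form a Gaussian scale mixture rather than iid Gaussians.
        W1 = rng.standard_normal((width, x.shape[0])) / np.sqrt(x.shape[0])
        h = np.maximum(W1 @ x, 0.0)
        if dependent:
            scales = 1.0 + rng.pareto(a=1.5, size=width)   # heavy-tailed stand-in
            w2 = rng.standard_normal(width) * np.sqrt(scales)
        else:
            w2 = rng.standard_normal(width)                # classical GP-limit setup
        return (w2 @ h) / np.sqrt(width)

    x = np.ones(5)
    gp_samples = [wide_layer_output(x, dependent=False) for _ in range(200)]
    mix_samples = [wide_layer_output(x, dependent=True) for _ in range(200)]
    # The scale-mixture network produces noticeably heavier-tailed outputs.
    ```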

    Consistent estimation of small masses in feature sampling

    Consider an (observable) random sample of size n from an infinite population of individuals, each individual being endowed with a finite set of features from a collection of features (F_j)_{j≥1} with unknown probabilities (p_j)_{j≥1}, i.e., p_j is the probability that an individual displays feature F_j. Under this feature sampling framework, in recent years there has been a growing interest in estimating the sum of the probability masses p_j of features observed with frequency r ≥ 0 in the sample, here denoted by M_{n,r}. This is the natural feature sampling counterpart of the classical problem of estimating small probabilities in the species sampling framework, where each individual is endowed with only one feature (or "species"). In this paper we study the problem of consistent estimation of the small mass M_{n,r}. We first show that there do not exist universally consistent estimators, in the multiplicative sense, of the missing mass M_{n,0}. Then, we introduce an estimator of M_{n,r} and identify sufficient conditions under which the estimator is consistent. In particular, we propose a nonparametric estimator \hat{M}_{n,r} of M_{n,r} which has the same analytic form as the celebrated Good–Turing estimator for small probabilities, with the sole difference that the two estimators have different ranges (supports). Then, we show that \hat{M}_{n,r} is strongly consistent, in the multiplicative sense, under the assumption that (p_j)_{j≥1} has regularly varying heavy tails.
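
    The abstract states that the proposed estimator shares the analytic form of the Good–Turing estimator, apart from its range. Assuming that familiar form, (r + 1) K_{n,r+1} / n, where K_{n,r+1} counts the features observed exactly r + 1 times among the n individuals, a sketch:

    ```python
    from collections import Counter

    def small_mass_estimate(feature_sets, r):
        # Estimate M_{n,r}, the total probability mass of features observed
        # exactly r times among n individuals, via the Good-Turing-style form
        #   (r + 1) * K_{n, r+1} / n,
        # where K_{n, r+1} is the number of features seen exactly r + 1 times.
        n = len(feature_sets)
        counts = Counter(f for feats in feature_sets for f in set(feats))
        k_next = sum(1 for c in counts.values() if c == r + 1)
        return (r + 1) * k_next / n

    # Example: 5 individuals, each displaying a set of features.
    sample = [{"A", "B"}, {"A"}, {"B", "C"}, {"A", "C"}, {"D"}]
    print(small_mass_estimate(sample, r=0))   # estimated missing mass: 0.2
    ```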